Wiener filter

In signal processing, the Wiener filter is a filter proposed by Norbert Wiener during the 1940s and published in 1949.[1] Its purpose is to reduce the amount of noise present in a signal by comparison with an estimation of the desired noiseless signal. The discrete-time equivalent of Wiener's work was derived independently by Andrey Kolmogorov and published in 1941. Hence the theory is often called the Wiener-Kolmogorov filtering theory. The Wiener-Kolmogorov filter was the first statistically designed filter to be proposed and subsequently gave rise to many others, including the famous Kalman filter. A Wiener filter is not an adaptive filter, because the theory behind this filter assumes that the inputs are stationary.[2]

Description

The goal of the Wiener filter is to filter out noise that has corrupted a signal. It is based on a statistical approach.

Typical filters are designed for a desired frequency response. However, the design of the Wiener filter takes a different approach. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant filter whose output would come as close to the original signal as possible. Wiener filters are characterized by the following:[3]

  1. Assumption: signal and (additive) noise are stationary linear stochastic processes with known spectral characteristics or known autocorrelation and cross-correlation
  2. Requirement: the filter must be physically realizable/causal (this requirement can be dropped, resulting in a non-causal solution)
  3. Performance criterion: minimum mean-square error (MMSE)

This filter is frequently used in the process of deconvolution; for this application, see Wiener deconvolution.

Wiener filter problem setup

The input to the Wiener filter is assumed to be a signal, s(t), corrupted by additive noise, n(t). The output, \hat{s}(t), is calculated by means of a filter, g(t), using the following convolution:[3]

\hat{s}(t) = g(t) * [s(t) + n(t)]

where * denotes convolution, g(t) is the Wiener filter's impulse response, and \hat{s}(t) is the estimate of s(t) obtained by filtering the observation x(t) = s(t) + n(t).

The error is defined as

e(t) = s(t + \alpha) - \hat{s}(t)

where \alpha is the delay of the estimate (a real parameter).

In other words, the error is the difference between the estimated signal and the true signal shifted by \alpha.

The squared error is

e^2(t) = s^2(t + \alpha) - 2s(t + \alpha)\hat{s}(t) + \hat{s}^2(t)

Depending on the value of \alpha, the problem can be described as follows:

  - If \alpha > 0, the problem is that of prediction (the error is reduced when \hat{s}(t) is similar to a later value of s).
  - If \alpha = 0, the problem is that of filtering (the error is reduced when \hat{s}(t) is similar to s(t)).
  - If \alpha < 0, the problem is that of smoothing (the error is reduced when \hat{s}(t) is similar to an earlier value of s).

Writing \hat{s}(t) as a convolution integral:

\hat{s}(t) = \int\limits_{-\infty}^{\infty} g(\tau)\left[s(t - \tau) + n(t - \tau)\right]\,d\tau.

Taking the expected value of the squared error results in

E(e^2) = R_s(0) - 2\int\limits_{-\infty}^{\infty} g(\tau) R_{xs}(\tau + \alpha)\,d\tau + \int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty} g(\tau) g(\theta) R_x(\tau - \theta)\,d\tau\,d\theta

where R_s(\tau) is the autocorrelation function of s(t), R_x(\tau) is the autocorrelation function of x(t) = s(t) + n(t), and R_{xs}(\tau) is the cross-correlation function of x(t) and s(t).

If the signal s(t) and the noise n(t) are uncorrelated (i.e., the cross-correlation R_{sn} is zero), then this means that

R_{xs}(\tau) = R_s(\tau)

R_x(\tau) = R_s(\tau) + R_n(\tau)

For many applications, the assumption of uncorrelated signal and noise is reasonable.
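
These identities follow directly from expanding the correlations of x(t) = s(t) + n(t); for instance,

R_x(\tau) = E\{[s(t) + n(t)][s(t + \tau) + n(t + \tau)]\} = R_s(\tau) + R_{sn}(\tau) + R_{ns}(\tau) + R_n(\tau),

and similarly R_{xs}(\tau) = R_s(\tau) + R_{ns}(\tau); the cross terms vanish when R_{sn} = 0.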

The goal is to minimize E(e^2), the expected value of the squared error, by finding the optimal g(\tau), the Wiener filter impulse response function. The minimum may be found by calculating the first-order incremental change in the mean-square error resulting from an incremental change in g(\tau) for positive time. This is

\delta E(e^2) = -2 \int\limits_{0}^{\infty} \delta g(\tau) \left( R_{xs}(\tau + \alpha) - \int\limits_{-\infty}^{\infty} g(\theta) R_x(\tau - \theta)\,d\theta \right) d\tau

For a minimum, this must vanish identically for all admissible variations \delta g(\tau), which leads to the Wiener-Hopf equation

R_{xs}(\tau + \alpha) = \int\limits_{0}^{\infty} g(\theta) R_x(\tau - \theta)\,d\theta \qquad \tau \geq 0.

This is the fundamental equation of the Wiener theory. The right-hand side resembles a convolution but is only over the semi-infinite range. The equation can be solved by a special technique due to Wiener and Hopf.
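
Equivalently, the Wiener-Hopf equation expresses the orthogonality principle: at the optimum, the estimation error is uncorrelated with every observation the causal filter is allowed to use,

E\{e(t)\, x(t - \tau)\} = 0 \qquad \tau \geq 0.

Expanding this expectation with e(t) = s(t + \alpha) - \hat{s}(t) and the convolution integral for \hat{s}(t) recovers the equation above.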

Wiener filter solutions

The Wiener filter problem has solutions for three possible cases: one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where a causal filter is desired (using an infinite amount of past data), and the finite impulse response (FIR) case where a finite amount of past data is used. The first case is simple to solve but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect, and in an appendix of Wiener's book Levinson gave the FIR solution.

Noncausal solution

G(s) = \frac{S_{x,s}(s)}{S_x(s)} e^{\alpha s},

where S_x(s) is the power spectral density of the observation x(t) = s(t) + n(t) and S_{x,s}(s) is the cross power spectral density of x(t) and s(t).

Provided that g(t) is optimal, the minimum mean-square error reduces to

E(e^2) = R_s(0) - \int_{-\infty}^{\infty} g(\tau) R_{xs}(\tau + \alpha)\,d\tau,

and the solution g(t) is the inverse two-sided Laplace transform of G(s).
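
As a concrete illustration, here is a minimal numerical sketch in Python (not from the source; the AR(1) signal model and all parameter values are illustrative assumptions). It takes \alpha = 0 and uncorrelated signal and noise, in which case S_{x,s} = S_s and S_x = S_s + S_n, so the noncausal filter reduces to the frequency-domain gain G(f) = S_s(f) / (S_s(f) + S_n(f)):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096

    # Illustrative signal model: an AR(1) process s[k] = a*s[k-1] + e[k]
    # observed in additive white noise (all parameters are assumptions).
    a, sigma_e, sigma_n = 0.95, 1.0, 2.0
    s = np.zeros(n)
    for k in range(1, n):
        s[k] = a * s[k - 1] + rng.normal(0, sigma_e)
    x = s + rng.normal(0, sigma_n, n)

    # Theoretical power spectra evaluated on the FFT grid.
    f = np.fft.rfftfreq(n)
    S_s = sigma_e**2 / np.abs(1 - a * np.exp(-2j * np.pi * f))**2  # AR(1) PSD
    S_n = np.full_like(f, sigma_n**2)                              # white-noise PSD

    # Noncausal Wiener gain, applied in the frequency domain.
    G = S_s / (S_s + S_n)
    s_hat = np.fft.irfft(np.fft.rfft(x) * G, n)

    print("MSE before filtering:", np.mean((x - s) ** 2))
    print("MSE after filtering: ", np.mean((s_hat - s) ** 2))

Because the gain is real and applied symmetrically in frequency, the resulting filter is zero-phase and hence noncausal, exactly as this solution requires.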

Causal solution

G(s) = \frac{H(s)}{S_x^{+}(s)},

where H(s) consists of the causal part of S_{x,s}(s) e^{\alpha s} / S_x^{-}(s), and S_x^{+}(s) and S_x^{-}(s) are the causal and anti-causal components of the spectral factorization S_x(s) = S_x^{+}(s) S_x^{-}(s) constructed below.

This general formula is complicated and deserves a more detailed explanation. To write down the solution G(s) in a specific case, one should follow these steps:[4]

  1. Start with the spectrum S_x(s) in rational form and factor it into causal and anti-causal components:
S_x(s) = S_x^{+}(s) S_x^{-}(s)

where S_x^{+} contains all the zeros and poles in the left half-plane (LHP) and S_x^{-} contains the zeros and poles in the right half-plane (RHP). This is called the Wiener-Hopf factorization (a small worked example follows this list).

  2. Divide S_{x,s}(s) e^{\alpha s} by S_x^{-}(s) and write out the result as a partial fraction expansion.
  3. Select only those terms in this expansion having poles in the LHP. Call these terms H(s).
  4. Divide H(s) by S_x^{+}(s). The result is the desired filter transfer function G(s).
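
As a worked example of the factorization in step 1 (an illustrative spectrum, not from the source), consider

S_x(s) = \frac{1 - s^2}{4 - s^2} = \frac{(1 + s)(1 - s)}{(2 + s)(2 - s)} = \underbrace{\frac{1 + s}{2 + s}}_{S_x^{+}(s)} \cdot \underbrace{\frac{1 - s}{2 - s}}_{S_x^{-}(s)}.

Here S_x^{+}(s) collects the zero at s = -1 and the pole at s = -2 (both in the LHP), while S_x^{-}(s) = S_x^{+}(-s) collects their mirror images in the RHP; on the imaginary axis, S_x(j\omega) = (1 + \omega^2)/(4 + \omega^2) > 0, as a power spectrum must be.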

Finite Impulse Response Wiener filter for discrete series

The causal finite impulse response (FIR) Wiener filter, instead of using some given data matrix X and output vector Y, finds optimal tap weights by using the statistics of the input and output signals. It populates the input matrix X with estimates of the auto-correlation of the input signal (the matrix T below) and populates the output vector Y with estimates of the cross-correlation between the desired and input signals (the vector v below).

In order to derive the coefficients of the Wiener filter, consider the signal w[n] being fed to a Wiener filter of order N with coefficients \{a_i\}, i = 0, \ldots, N. The output of the filter is denoted x[n], which is given by the expression

x[n] = \sum_{i=0}^N a_i w[n-i] .

The residual error is denoted e[n] and is defined as e[n] = x[n] − s[n], where s[n] is the desired signal. The Wiener filter is designed so as to minimize the mean square error (the MMSE criterion), which can be stated concisely as follows:

a_i = \arg \min ~E\{e^2[n]\} ,

where E\{\cdot\} denotes the expectation operator. In the general case, the coefficients a_i may be complex and may be derived for the case where w[n] and s[n] are complex as well. With a complex signal, the matrix to be solved is a Hermitian Toeplitz matrix rather than a symmetric Toeplitz matrix. For simplicity, the following considers only the case where all these quantities are real. The mean square error (MSE) may be rewritten as:


\begin{array}{rcl}
E\{e^2[n]\} &=& E\{(x[n]-s[n])^2\}\\
&=& E\{x^2[n]\} + E\{s^2[n]\} - 2E\{x[n]s[n]\}\\
&=& E\{\big( \sum_{i=0}^N a_i w[n-i] \big)^2\} + E\{s^2[n]\} - 2E\{\sum_{i=0}^N a_i w[n-i]s[n]\} .
\end{array}

To find the vector [a_0, \ldots, a_N] that minimizes the expression above, calculate its derivative with respect to each a_i:


\begin{array}{rcl}
\frac{\partial}{\partial a_i} E\{e^2[n]\} &=& 2E\{ \big( \sum_{j=0}^N a_j w[n-j] \big) w[n-i] \} - 2E\{s[n]w[n-i]\} \quad i=0,\, \ldots,\, N\\
&=& 2 \sum_{j=0}^N E\{w[n-j]w[n-i]\} a_j - 2E\{ w[n-i]s[n]\} .
\end{array}

Assuming that w[n] and s[n] are each stationary and jointly stationary, the sequences R_w[m] and R_{ws}[m], known respectively as the autocorrelation of w[n] and the cross-correlation between w[n] and s[n], can be defined as follows:


\begin{align}
R_w[m] &= E\{w[n]w[n+m]\} \\
R_{ws}[m] &= E\{w[n]s[n+m]\} .
\end{align}

The derivative of the MSE may therefore be rewritten as (note that E\{w[n-i]s[n]\} = R_{ws}[i])

\frac{\partial}{\partial a_i} E\{e^2[n]\} = 2 \sum_{j=0}^{N} R_w[j-i] a_j - 2 R_{ws}[i] \quad i = 0, \ldots, N .

Setting the derivative equal to zero results in

\sum_{j=0}^N R_w[j-i] a_j = R_{ws}[i] \quad i = 0, \ldots, N ,

which can be rewritten in matrix form

\begin{align}
&\mathbf{T}\mathbf{a} = \mathbf{v}\\
\Rightarrow
&\begin{bmatrix}
R_w[0] & R_w[1] & \cdots & R_w[N] \\
R_w[1] & R_w[0] & \cdots & R_w[N-1] \\
\vdots & \vdots & \ddots & \vdots \\
R_w[N] & R_w[N-1] & \cdots & R_w[0]
\end{bmatrix}
\begin{bmatrix}
a_0 \\ a_1 \\ \vdots \\ a_N
\end{bmatrix}
=
\begin{bmatrix}
R_{ws}[0] \\ R_{ws}[1] \\ \vdots \\ R_{ws}[N]
\end{bmatrix}
\end{align}

These equations are known as the Wiener-Hopf equations. The matrix T appearing in the equation is a symmetric Toeplitz matrix. These matrices are known to be positive definite and therefore non-singular, yielding a unique solution to the determination of the Wiener filter coefficient vector, \mathbf{a} = \mathbf{T}^{-1}\mathbf{v}. Furthermore, there exists an efficient algorithm to solve such Wiener-Hopf equations known as the Levinson-Durbin algorithm, so an explicit inversion of \mathbf{T} is not required.
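
As a sketch of how these equations might be solved in practice (not from the source; the AR(1)-plus-noise setup and all parameters are illustrative assumptions), the following Python fragment forms T and v from biased sample correlation estimates and solves the system with SciPy's solve_toeplitz, which uses Levinson-Durbin recursion internally:

    import numpy as np
    from scipy.linalg import solve_toeplitz

    rng = np.random.default_rng(1)
    num, N = 200_000, 16               # sample size; filter order (N + 1 taps)

    # Illustrative setup: the desired signal s[n] is an AR(1) process and
    # the filter input is the noisy observation w[n] = s[n] + white noise.
    s = np.zeros(num)
    for k in range(1, num):
        s[k] = 0.9 * s[k - 1] + rng.normal()
    w = s + rng.normal(size=num)

    def corr(x, y, maxlag):
        # Biased sample estimate of E{x[n] y[n+m]} for m = 0, ..., maxlag.
        return np.array([np.dot(x[:num - m], y[m:]) / num
                         for m in range(maxlag + 1)])

    T_col = corr(w, w, N)              # first column (= first row) of T
    v = corr(w, s, N)                  # v[i] = R_ws[i]

    a = solve_toeplitz(T_col, v)       # Levinson-Durbin, no explicit inverse

    x_hat = np.convolve(w, a)[:num]    # causal FIR filter output
    print("MSE of raw observation:  ", np.mean((w - s) ** 2))
    print("MSE of Wiener FIR output:", np.mean((x_hat - s) ** 2))

Because only the first column of T and the vector v are needed, the O(N^2) Levinson-Durbin recursion replaces the O(N^3) cost of a general linear solve.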

Relationship to the least mean squares filter

The realization of the causal Wiener filter looks a lot like the solution to the least squares estimate, except in the signal processing domain. The least squares solution, for input matrix \mathbf{X} and output vector \mathbf{y}, is

\boldsymbol{\hat\beta} = (\mathbf{X}^\mathbf{T}\mathbf{X})^{-1}\mathbf{X}^\mathbf{T}\mathbf{y} .
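
In code, this normal-equations solution can be sketched as follows (illustrative data; the least-squares routine is numerically safer than forming X^T X explicitly):

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(100, 5))                  # illustrative input matrix
    y = X @ np.arange(1.0, 6.0) + 0.1 * rng.normal(size=100)

    # Solves min ||X beta - y||^2, i.e. beta_hat = (X^T X)^{-1} X^T y.
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)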

The FIR Wiener filter is related to the least mean squares (LMS) adaptive filter: minimizing the LMS error criterion does not rely on ensemble auto-correlations or cross-correlations, which are replaced by instantaneous estimates updated at each sample, and its solution converges toward the Wiener filter solution.
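
For comparison, a minimal sketch of the LMS update under the same illustrative AR(1)-plus-noise setup as above (the step size mu and all parameters are assumptions; for a sufficiently small step size the weights drift toward \mathbf{T}^{-1}\mathbf{v}):

    import numpy as np

    rng = np.random.default_rng(2)
    num, N, mu = 200_000, 16, 1e-3     # samples; filter order; LMS step size

    s = np.zeros(num)
    for k in range(1, num):
        s[k] = 0.9 * s[k - 1] + rng.normal()
    w = s + rng.normal(size=num)

    a = np.zeros(N + 1)                # adaptive tap weights
    for k in range(N, num):
        u = w[k - N:k + 1][::-1]       # input vector [w[k], ..., w[k-N]]
        e = s[k] - a @ u               # instantaneous error
        a += 2 * mu * e * u            # stochastic-gradient (LMS) step

    # After the run, a approximates the FIR Wiener coefficients T^{-1} v.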

References

  1. ^ Wiener, Norbert (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series. New York: Wiley. ISBN 0-262-73005-7. 
  2. ^ Standard Mathematical Tables and Formulae (30 ed.). CRC Press, Inc. 1996. pp. 660–661. ISBN 0-8493-2479-3. 
  3. ^ a b Brown, Robert Grover; Hwang, Patrick Y.C. (1996). Introduction to Random Signals and Applied Kalman Filtering (3 ed.). New York: John Wiley & Sons. ISBN 0-471-12839-2. 
  4. ^ Welch, Lloyd R. "Wiener-Hopf Theory". http://csi.usc.edu/PDF/wienerhopf.pdf. 
